
    Intermediate problems in modular circuits satisfiability

    In arXiv:1710.08163 a generalization of Boolean circuits to arbitrary finite algebras was introduced and applied to sketch the P versus NP-complete borderline for circuit satisfiability over algebras from congruence modular varieties. However, the problem remained open for algebras that are nilpotent but not supernilpotent: satisfiability over supernilpotent algebras had been shown to be solvable in polynomial time, while for nilpotent algebras NP-hardness had not been established. In this paper we provide a broad class of examples lying in this grey area and show that, under the Exponential Time Hypothesis and the Strong Exponential Size Hypothesis (which states that Boolean circuits need exponentially many modular counting gates to produce Boolean conjunctions of any arity), satisfiability over these algebras has intermediate complexity between $\Omega(2^{c\log^{h-1} n})$ and $O(2^{c\log^{h} n})$, where $h$ measures how much a nilpotent algebra fails to be supernilpotent. We also sketch how these examples could be used as paradigms to fill the nilpotent versus supernilpotent gap in general. Our examples are striking in view of the natural strong connections between circuit satisfiability and the Constraint Satisfaction Problem, for which the dichotomy was shown by Bulatov and Zhuk.
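
    For intuition, these bounds are quasi-polynomial. A standard rewriting of the exponent (not taken from the paper) shows they sit strictly between polynomial and exponential growth:

```latex
% With logarithms base 2, the upper bound rewrites as a quasi-polynomial:
2^{c\log^{h} n} \;=\; 2^{\,c(\log n)^{h-1}\,\log n} \;=\; n^{\,c\log^{h-1} n},
% so already for h = 2 it reads n^{c\log n}:
% super-polynomial, yet sub-exponential.
```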

    Probabilistic Model Counting with Short XORs

    The idea of counting the number of satisfying truth assignments (models) of a formula by adding random parity constraints can be traced back to the seminal work of Valiant and Vazirani, showing that NP is as easy as detecting unique solutions. While theoretically sound, the random parity constraints in that construction have the following drawback: each constraint, on average, involves half of all variables. As a result, the branching factor associated with searching for models that also satisfy the parity constraints quickly gets out of hand. In this work we prove that one can work with much shorter parity constraints and still get rigorous mathematical guarantees, especially when the number of models is large so that many constraints need to be added. Our work is based on the realization that the essential feature for random systems of parity constraints to be useful in probabilistic model counting is that the geometry of their set of solutions resembles an error-correcting code.
    Comment: To appear in SAT 1
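
    The counting idea can be sketched concretely: each random parity (XOR) constraint halves the solution set in expectation, so the number of constraints a formula can absorb before becoming unsatisfiable estimates log2 of its model count. Below is a minimal Python sketch; brute-force enumeration stands in for a SAT solver, and names such as `estimate_log2_count` are illustrative, not from the paper:

```python
import itertools
import random

def models_of(clauses, n):
    """Brute-force: all satisfying assignments of a CNF over variables 1..n.
    (Stand-in for a SAT solver; fine only for tiny n.)"""
    return [a for a in itertools.product([False, True], repeat=n)
            if all(any((lit > 0) == a[abs(lit) - 1] for lit in cl)
                   for cl in clauses)]

def random_xor(n, k):
    """A parity constraint over k random variables with a random parity bit."""
    return (random.sample(range(n), k), random.getrandbits(1))

def survives(assignment, xors):
    return all(sum(assignment[v] for v in vs) % 2 == b for vs, b in xors)

def estimate_log2_count(clauses, n, k, trials=30):
    """Largest m such that a model usually survives m random k-XORs
    approximates log2(#models): each XOR halves the count in expectation."""
    models = models_of(clauses, n)
    for m in range(n + 1):
        hits = sum(any(survives(a, [random_xor(n, k) for _ in range(m)])
                       for a in models)
                   for _ in range(trials))
        if hits < trials / 2:   # most trials kill every model
            return m - 1
    return n

# Example: (x1 or x2) and (not x1 or x3) has 4 models over 3 variables,
# so the estimate should hover around log2(4) = 2.
print(estimate_log2_count([[1, 2], [-1, 3]], n=3, k=2))
```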

    Adiabatic Quantum Computing with Phase Modulated Laser Pulses

    Implementation of quantum logic gates for multilevel systems is demonstrated through decoherence control under the quantum adiabatic method, using simple phase-modulated laser pulses. We make use of selective population inversion and Hamiltonian evolution in time to achieve these goals robustly, instead of the standard unitary transformation language.
    Comment: 19 pages, 6 figures, submitted to JOP
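
    For reference, the textbook adiabatic criterion underlying such schemes (a standard statement, not this paper's specific analysis) requires the sweep to be slow relative to the squared energy gap:

```latex
% Adiabatic theorem, standard two-level form: population remains in the
% instantaneous ground state |0(t)> provided, for all t,
\frac{\bigl|\langle 1(t)|\,\partial_t H(t)\,|0(t)\rangle\bigr|}{\Delta(t)^{2}}
\;\ll\; 1,
\qquad \Delta(t) = E_{1}(t) - E_{0}(t).
```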

    Codeword stabilized quantum codes: algorithm and structure

    The codeword stabilized ("CWS") quantum codes formalism presents a unifying approach to both additive and nonadditive quantum error-correcting codes (arXiv:0708.1021). This formalism reduces the problem of constructing such quantum codes to finding a binary classical code correcting an error pattern induced by a graph state. Finding such a classical code can be very difficult. Here, we consider an algorithm which maps the search for CWS codes to the problem of identifying maximum cliques in a graph. While solving this problem is in general very hard, we prove three structure theorems which reduce the search space, specifying certain admissible and optimal ((n,K,d)) additive codes. In particular, we find that there does not exist any ((7,3,3)) CWS code, even though the linear programming bound does not rule it out. The complexity of the CWS search algorithm is compared with the contrasting method introduced by Aggarwal and Calderbank (arXiv:cs/0610159).
    Comment: 11 pages, 1 figure
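
    Since the search is reduced to maximum clique, here is a minimal exhaustive clique finder in Python to make the target problem concrete (a generic sketch; the graph construction specific to CWS codes is not reproduced here):

```python
from itertools import combinations

def max_clique(vertices, edges):
    """Exhaustive maximum-clique search: try candidate sets from largest to
    smallest and return the first fully connected one. Exponential time,
    so only suitable for small search graphs."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(len(vertices), 0, -1):
        for cand in combinations(vertices, size):
            if all(v in adj[u] for u, v in combinations(cand, 2)):
                return set(cand)
    return set()

# Toy graph: vertices would encode candidate codewords, edges compatibility.
print(max_clique([0, 1, 2, 3], [(0, 1), (0, 2), (1, 2), (2, 3)]))  # {0, 1, 2}
```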

    The Computational Power of Minkowski Spacetime

    The Lorentzian length of a timelike curve connecting both endpoints of a classical computation is a function of the path taken through Minkowski spacetime. The associated runtime difference is due to time dilation: the phenomenon whereby an observer finds that another's physically identical ideal clock has ticked at a different rate than their own clock. Using ideas appearing in the framework of computational complexity theory, time dilation is quantified as an algorithmic resource by relating relativistic energy to an $n$th-order polynomial-time reduction at the completion of an observer's journey. These results enable a comparison between the optimal quadratic Grover speedup from quantum computing and an $n = 2$ speedup using classical computers and relativistic effects. The goal is not to propose a practical model of computation, but to probe the ultimate limits physics places on computation.
    Comment: 6 pages, LaTeX, feedback welcome
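
    The time-dilation bookkeeping behind such comparisons is the standard special-relativistic proper-time integral (textbook material, not the paper's construction):

```latex
% Proper time \tau experienced along a timelike worldline with coordinate
% speed v(t), versus coordinate time T of the computation:
\tau \;=\; \int_{0}^{T} \sqrt{1 - \frac{v(t)^{2}}{c^{2}}}\, dt
\;=\; \frac{T}{\gamma} \ \text{ for constant } v,
\qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
% so a traveling observer can see a T-step computation finish in \tau << T.
```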

    Verification of Hierarchical Artifact Systems

    Data-driven workflows, of which IBM's Business Artifacts are a prime exponent, have been successfully deployed in practice, adopted in industrial standards, and have spawned a rich body of research in academia, focused primarily on static analysis. The present work represents a significant advance on the problem of artifact verification, by considering a much richer and more realistic model than in previous work, incorporating core elements of IBM's successful Guard-Stage-Milestone model. In particular, the model features task hierarchy, concurrency, and richer artifact data. It also allows database key and foreign key dependencies, as well as arithmetic constraints. The results show decidability of verification and establish its complexity, making use of novel techniques including a hierarchy of Vector Addition Systems and a variant of quantifier elimination tailored to our context.
    Comment: Full version of the accepted PODS paper

    NP-hardness of the cluster minimization problem revisited

    The computational complexity of the "cluster minimization problem" is revisited [L. T. Wille and J. Vennik, J. Phys. A 18, L419 (1985)]. It is argued that the original NP-hardness proof does not apply to pairwise potentials of physical interest, such as those that depend on the geometric distance between the particles. A geometric analog of the original problem is formulated, and a new proof for such potentials is provided by a polynomial-time transformation from the independent set problem for unit disk graphs. Limitations of this formulation are pointed out, and new subproblems that bear more directly on the numerical study of clusters are suggested.
    Comment: 8 pages, 2 figures, accepted to J. Phys. A: Math. and Gen.
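
    To make the reduction's source problem concrete, here is a small Python sketch of a unit disk graph built from point coordinates, together with a brute-force independent-set search (an illustrative construction, not the paper's transformation):

```python
import itertools
import math

def unit_disk_graph(points, r=1.0):
    """Edges join points whose geometric distance is at most r."""
    return [(i, j) for i, j in itertools.combinations(range(len(points)), 2)
            if math.dist(points[i], points[j]) <= r]

def max_independent_set(n, edges):
    """Exhaustive search for a largest vertex set spanning no edge."""
    E = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for cand in itertools.combinations(range(n), size):
            if not any(frozenset(p) in E
                       for p in itertools.combinations(cand, 2)):
                return set(cand)
    return set()

pts = [(0.0, 0.0), (0.5, 0.0), (2.0, 0.0), (2.0, 0.5)]
g = unit_disk_graph(pts)                 # [(0, 1), (2, 3)]
print(max_independent_set(len(pts), g))  # {0, 2}
```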

    Complete Characterization of the Ground Space Structure of Two-Body Frustration-Free Hamiltonians for Qubits

    The problem of finding the ground state of a frustration-free Hamiltonian carrying only two-body interactions between qubits is known to be solvable in polynomial time. It has also been shown recently that, for any such Hamiltonian, there is always a ground state that is a product of single- or two-qubit states. However, it remains unclear whether the whole ground space has any succinct structure. Here, we give a complete characterization of the ground space of any two-body frustration-free Hamiltonian of qubits: namely, it is the span of tree tensor network states of the same tree structure. This characterization allows us to show that the problem of determining the ground-state degeneracy is as hard as, but no harder than, its classical analog.
    Comment: 5 pages, 3 figures
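
    The defining property, that one state simultaneously minimizes every local term, can be checked numerically in a few lines. A minimal NumPy sketch with a toy 3-qubit Hamiltonian (an illustrative choice of projectors, unrelated to the paper's general characterization):

```python
import numpy as np

I2 = np.eye(2)
P11 = np.diag([0.0, 0.0, 0.0, 1.0])   # projector onto |11> of a qubit pair

# Two-body terms on a 3-qubit chain: penalize |11> on pairs (0,1) and (1,2).
terms = [np.kron(P11, I2), np.kron(I2, P11)]
H = sum(terms)

vals, vecs = np.linalg.eigh(H)
ground = vecs[:, np.isclose(vals, vals.min())]

# Frustration-free: the ground energy is 0 and every ground vector is
# annihilated by each local term separately, not just by their sum.
assert np.isclose(vals.min(), 0.0)
for t in terms:
    assert np.allclose(t @ ground, 0.0)
print("ground-space dimension:", ground.shape[1])  # 5 basis states avoid '11'
```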

    Translation from Classical Two-Way Automata to Pebble Two-Way Automata

    We study the relation between the standard two-way automata and more powerful devices, namely, two-way finite automata with an additional "pebble" movable along the input tape. As in the case of the classical two-way machines, it is not known whether there exists a polynomial trade-off, in the number of states, between nondeterministic and deterministic pebble two-way automata. However, we show that these two machine models are not independent: if there exists a polynomial trade-off for the classical two-way automata, then there must also exist a polynomial trade-off for the pebble two-way automata. Thus, we have an upward collapse (or a downward separation) from the classical two-way automata to the more powerful pebble automata, still staying within the class of regular languages. The same upward collapse holds for complementation of nondeterministic two-way machines. These results are obtained by showing that each pebble machine can be, by using suitable inputs, simulated by a classical two-way automaton with a linear number of states (and vice versa), despite the existing exponential blow-up between the classical and pebble two-way machines.
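
    To fix what a classical two-way automaton is, here is a minimal deterministic simulator in Python (a generic sketch with endmarkers; the pebble extension and the paper's state trade-off construction are not modeled):

```python
def run_2dfa(delta, start, accepting, word, max_steps=100_000):
    """Simulate a deterministic two-way automaton on '>word<'.
    delta maps (state, symbol) to (state, move) with move in {-1, +1};
    the machine accepts by walking off the right endmarker in an accepting
    state, and rejects on undefined moves, falling off the left end, or
    exceeding the step budget (i.e., looping)."""
    tape = '>' + word + '<'
    state, pos = start, 0
    for _ in range(max_steps):
        if pos == len(tape):
            return state in accepting
        key = (state, tape[pos])
        if key not in delta:
            return False
        state, move = delta[key]
        pos += move
        if pos < 0:
            return False
    return False

# Toy one-way instance: accept words over {a, b} containing an 'a'.
delta = {('q', '>'): ('q', +1), ('q', 'b'): ('q', +1),
         ('q', 'a'): ('acc', +1), ('acc', 'a'): ('acc', +1),
         ('acc', 'b'): ('acc', +1), ('acc', '<'): ('acc', +1)}
print(run_2dfa(delta, 'q', {'acc'}, 'bba'))  # True
print(run_2dfa(delta, 'q', {'acc'}, 'bbb'))  # False
```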